On Contrastive Representations of Stochastic Processes

Neural Information Processing Systems

Learning representations of stochastic processes is an emerging problem in machine learning with applications from meta-learning to physical object models to time series. Typical methods rely on exact reconstruction of observations, but this approach breaks down as observations become high-dimensional or noise distributions become complex. To address this, we propose a unifying framework for learning contrastive representations of stochastic processes (CReSP) that does away with exact reconstruction. We dissect potential use cases for stochastic process representations, and propose methods that accommodate each. Empirically, we show that our methods are effective for learning representations of periodic functions, 3D objects and dynamical processes. Our methods tolerate noisy high-dimensional observations better than traditional approaches, and the learned representations transfer to a range of downstream tasks.
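The abstract does not spell out the training objective, but contrastive representation learning of the kind CReSP builds on is typically an InfoNCE-style objective: embeddings of observations from the same underlying process are treated as positive pairs, and other samples in the batch act as negatives. A minimal NumPy sketch of such a loss, purely illustrative and not the paper's actual method or API:

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.1):
    """Contrastive (InfoNCE) loss: each query's positive key is the key
    with the same row index; all other rows in the batch are negatives."""
    # Normalize embeddings so the dot product is cosine similarity.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature  # (N, N) similarity matrix
    # Softmax cross-entropy with the diagonal entries as the positive class.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: correctly paired embeddings should incur a lower loss
# than randomly shuffled pairings.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
shuffled = info_nce_loss(z, rng.permutation(z, axis=0))
print(aligned < shuffled)
```

The appeal for stochastic processes, as the abstract argues, is that this objective never reconstructs the raw observation, so it remains usable when observations are high-dimensional or corrupted by complex noise.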




Revisit to the Inverse Exponential Radon Transform

You, Jason

arXiv.org Machine Learning

This revisit surveys analytical methods for the inverse exponential Radon transform, which has been investigated over the past three decades both for its mathematical interest and for medical applications such as nuclear medicine emission imaging. The classical inversion formula is derived via the recent argument developed for the inverse attenuated Radon transform. This derivation allows the exponential parameter to be a complex constant, which is useful in other applications such as magnetic resonance imaging and tensor field imaging. The survey also covers the new technique of using the finite Hilbert transform to achieve exact reconstruction from 180-degree data. Special attention is paid to two practically important subjects: exact reconstruction from partial measurements such as half-scan and truncated-scan data, and reconstruction from diverging-beam data. Noise propagation in the reconstruction is treated heuristically rather than by rigorous mathematical inference. Numerical realizations of several classical reconstruction algorithms are included, and the conclusion discusses several topics for future investigation.
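For readers unfamiliar with the operator under discussion, the exponential Radon transform of a function $f$ on the plane can be written in its standard formulation as follows (notation here is generic and not necessarily the paper's):

```latex
R_\mu f(\phi, s) \;=\; \int_{-\infty}^{\infty}
  f\!\left(s\,\theta + t\,\theta^{\perp}\right) e^{\mu t}\,\mathrm{d}t,
\qquad
\theta = (\cos\phi,\, \sin\phi),\quad
\theta^{\perp} = (-\sin\phi,\, \cos\phi),
```

where $\mu$ is the exponential parameter. Setting $\mu = 0$ recovers the classical Radon transform, and allowing $\mu$ to be a complex constant is precisely the generalization the survey exploits for applications beyond emission imaging.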